Search results for "Markov Decision Process"

Showing 4 of 4 documents

Expanding the Active Inference Landscape: More Intrinsic Motivations in the Perception-Action Loop

2018

Active inference is an ambitious theory that treats perception, inference, and action selection in autonomous agents under a single principle. It suggests biologically plausible explanations for many cognitive phenomena, including consciousness. In active inference, action selection is driven by an objective function that evaluates possible future actions with respect to current, inferred beliefs about the world. Active inference is, at its core, independent of extrinsic rewards, resulting in a high level of robustness across, e.g., different environments or agent morphologies. In the literature, paradigms that share this independence have been summarised under the notion of in…
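The objective function this abstract refers to is typically the expected free energy. As an illustrative sketch only (not the authors' implementation), a common one-step decomposition scores each action as risk (divergence of predicted observations from preferred ones) plus ambiguity (expected entropy of the likelihood mapping); all matrices and names below are hypothetical:

```python
import math

def expected_free_energy(q_s, B_a, A, log_C):
    """One-step expected free energy for a single action (risk + ambiguity form).
    q_s: belief over hidden states; B_a[s2][s]: P(s2 | s, action);
    A[o][s]: P(o | s); log_C[o]: log-preference for observation o."""
    n_s, n_o = len(q_s), len(A)
    # predicted state belief after taking the action
    q_s2 = [sum(B_a[s2][s] * q_s[s] for s in range(n_s)) for s2 in range(n_s)]
    # predicted observation distribution
    q_o = [sum(A[o][s] * q_s2[s] for s in range(n_s)) for o in range(n_o)]
    # risk: KL divergence of predicted observations from preferences
    risk = sum(q * (math.log(q + 1e-16) - log_C[o]) for o, q in enumerate(q_o))
    # ambiguity: expected entropy of the likelihood mapping
    ambiguity = -sum(q_s2[s] * sum(A[o][s] * math.log(A[o][s] + 1e-16)
                                   for o in range(n_o)) for s in range(n_s))
    return risk + ambiguity

# Hypothetical 2-state, 2-observation model; preferences favour observation 1.
A = [[0.9, 0.1], [0.1, 0.9]]            # likelihood P(o | s)
log_C = [math.log(0.1), math.log(0.9)]  # log-preferences over observations
q_s = [1.0, 0.0]                        # agent believes it is in state 0
B_stay = [[1.0, 0.0], [0.0, 1.0]]       # "stay" keeps the current state
B_go = [[0.0, 0.0], [1.0, 1.0]]         # "go" moves to state 1 regardless
G_stay = expected_free_energy(q_s, B_stay, A, log_C)
G_go = expected_free_energy(q_s, B_go, A, log_C)
# the action leading toward preferred observations has lower expected free energy
```

Note that no extrinsic reward appears anywhere: the preferences `log_C` are part of the agent's generative model, which is the sense in which the abstract calls the scheme independent of extrinsic rewards.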

Keywords: FOS: Computer and information sciences; Computer science; Computer Science - Artificial Intelligence (cs.AI); Computer Science - Systems and Control; Systems and Control (eess.SY); predictive information; Biomedical Engineering; Inference; Action selection; active inference; Artificial Intelligence; FOS: Electrical engineering, electronic engineering, information engineering; Formal concept analysis; Methods; perception-action loop; universal reinforcement learning; intrinsic motivation; Free energy principle; Cognitive science; Robotics and AI; Partially observable Markov decision process; Action (philosophy); empowerment; Independence (mathematical logic); Biological plausibility; variational inference; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 020201 artificial intelligence & image processing; 03 medical and health sciences; 0302 clinical medicine; 030217 neurology & neurosurgery; I.2.0; I.2.6; I.5.0; I.5.1; 62F15; 91B06; lcsh:RC321-571; lcsh:Neurosciences. Biological psychiatry. Neuropsychiatry

Sequence Q-learning: A memory-based method towards solving POMDP

2015

A partially observable Markov decision process (POMDP) models a control problem in which the state is only partially observable by the agent. The two main approaches to solving such tasks are value-function methods and direct search in policy space. This paper introduces the Sequence Q-learning method, which extends the well-known Q-learning algorithm to POMDPs by adding a dedicated sequence-management framework: action values are generalised to "sequence" values, and a "sequence continuity principle" is introduced.
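For context, plain tabular Q-learning, which Sequence Q-learning extends, can be sketched on a toy fully observable chain MDP. The environment and all parameters below are illustrative, not taken from the paper:

```python
import random

def q_learning(n_states=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    """Tabular Q-learning on a toy chain MDP: states 0..n-1, actions step
    left (-1) or right (+1), reward 1 for reaching the rightmost (terminal) state."""
    rng = random.Random(seed)
    actions = (-1, +1)
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    goal = n_states - 1
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), goal)        # clip at the chain boundaries
            r = 1.0 if s2 == goal else 0.0
            # terminal states carry no future value
            future = 0.0 if s2 == goal else max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
policy = {s: max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(3)}
# the learned greedy policy moves right from every non-terminal state
```

In a POMDP this update is no longer sound, because the observed state does not identify the true state; the paper's contribution is to key values on action sequences rather than single actions, which this sketch does not reproduce.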

Keywords: Sequence; Computer science; Q-learning; Partially observable Markov decision process; Markov process; Context (language use); Markov model; Bellman equation; Artificial intelligence; Markov decision process
Published in: 2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR)

A Cognitive Dialogue Manager for Education Purposes

2011

A conversational agent is a software system that can interact with users in a natural way, often using natural-language capabilities. In this chapter, an evolution of a conversational agent is presented, based on the dialogue-management techniques defined for conversational agents. The presented conversational agent is intended to act as part of an educational system. The chapter outlines state-of-the-art systems and techniques for dialogue management in cognitive educational systems, along with the underlying psychological and social aspects. We present our framework for a dialogue manager aimed at reducing the uncertainty in users' sentences during the assessment of hi…

Keywords: Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni; Knowledge management; Ontology; Latent semantic analysis; Semantic space; Partially observable Markov decision process; Cognition; POMDP; Ontology (information science); Chatbot; World Wide Web; Semantic integration; POS Tagging; Psychology; Ontology Mapping; OWL

Comprehensive Uncertainty Management in MDPs

2013

Multistage decision-making in robots performing real-world tasks is a process affected by uncertainty. The effects of an agent's actions in a physical environment cannot always be predicted deterministically and precisely. Moreover, observing the environment can be too onerous for a robot, and hence not continuous. Markov Decision Processes (MDPs) are a well-known solution inspired by the classic probabilistic approach to managing uncertainty. On the other hand, including fuzzy logics and possibility theory has widened uncertainty representation. Probability, possibility, fuzzy logics, and epistemic belief allow treating different and not always superimposable facets of unce…
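The classic probabilistic treatment the abstract contrasts with can be illustrated by value iteration on a small MDP; this is a generic sketch under an assumed transition/reward encoding, not the paper's extended possibilistic or fuzzy machinery:

```python
def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Compute optimal state values for a finite MDP.
    P[s][a] = list of (prob, next_state) pairs; R[s][a] = expected immediate reward."""
    n = len(P)
    V = [0.0] * n
    while True:
        # Bellman optimality backup for every state
        V_new = [max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                     for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(x - y) for x, y in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# Hypothetical 2-state example: state 1 is absorbing and pays reward 1 per step;
# from state 0, action 0 stays put (reward 0) and action 1 reaches state 1
# with probability 0.9 (otherwise the move fails and the robot stays in state 0).
P = [
    [[(1.0, 0)], [(0.9, 1), (0.1, 0)]],  # state 0: stay / try to move
    [[(1.0, 1)]],                         # state 1: absorbing
]
R = [[0.0, 0.0], [1.0]]
V = value_iteration(P, R)
# V[1] converges to 1 / (1 - 0.95) = 20; V[0] is slightly lower because the move can fail
```

The stochastic transition `(0.9, 1), (0.1, 0)` is exactly the kind of probabilistic uncertainty the abstract notes cannot capture every facet of imprecision, which motivates the possibilistic and fuzzy extensions the paper discusses.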

Keywords: Settore ING-INF/05 - Sistemi Di Elaborazione Delle Informazioni; Process (engineering); Probabilistic logic; Fuzzy logic; Possibility distribution; Uncertainty representation; uncertainty; Markov Decision Process; believability; Robot; Artificial intelligence; Markov decision process; Possibility theory; Mathematics